
fix: Address PR #206 review feedback for OpenAI thinking mode #207

Closed
guunergooner wants to merge 2 commits into claude-code-best:main from guunergooner:fix/openai-thinking-review-feedback

Conversation

Contributor

@guunergooner guunergooner commented Apr 8, 2026

Summary

Fixes review feedback from #206:

  • afterEach env var leak: Use delete instead of assigning undefined when original env var was undefined, preventing state leak between tests
  • Turn boundary detection for multimodal inputs: Treat non-text user inputs (e.g., images) as new turns, not just text content — fixes reasoning_content not being stripped for multimodal-only turns
  • Improved test coverage: Extract buildOpenAIRequestBody() from queryModelOpenAI for testability; replace constant-assertion tests with real function output verification
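The env-var fix in the first bullet can be sketched as follows. This is an illustrative restore helper, not the PR's actual test code; `restoreEnvVar` and the variable name are stand-ins.

```typescript
// Illustrative sketch of the afterEach fix: restoring an env var that may
// have been unset before the test ran. Assigning `undefined` to process.env
// coerces the value to the string "undefined", leaving a stale entry behind;
// `delete` removes the key entirely.
function restoreEnvVar(name: string, original: string | undefined): void {
  if (original === undefined) {
    delete process.env[name]; // variable was unset before the test: remove it
  } else {
    process.env[name] = original; // variable had a value: put it back
  }
}

// Usage mirroring a test's setup/teardown for a hypothetical variable:
const original = process.env.OPENAI_ENABLE_THINKING;
process.env.OPENAI_ENABLE_THINKING = "1"; // test-specific override
restoreEnvVar("OPENAI_ENABLE_THINKING", original); // back to the pre-test state
```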

Test plan

  • All 94 OpenAI tests pass (0 fail)
  • thinking.test.ts: 36 tests pass (was 20, now covers request body params)
  • convertMessages.test.ts: 22 tests pass

Signed-off-by: guunergooner <tongchao0923@gmail.com>

🤖 Generated with Claude Code

Summary by CodeRabbit

  • New Features

    • Added support for OpenAI thinking mode to enable extended reasoning during model processing.
    • Automatic thinking mode activation for DeepSeek reasoner models.
    • Configurable thinking mode via environment-based control.
  • Tests

    • Added test coverage for thinking mode functionality and multi-turn conversation behavior.

Support DeepSeek reasoning models (deepseek-reasoner and DeepSeek-V3.2)
in OpenAI-compatible API layer with automatic thinking mode detection.

Key changes:
- Add `isOpenAIThinkingEnabled()` for auto-detecting thinking-capable models
- Inject thinking parameters in request body (both official API and vLLM formats)
- Preserve reasoning_content in message conversion for tool call iterations
- Strip reasoning_content from previous turns to save bandwidth
- Add comprehensive test coverage for thinking mode detection

Thinking mode is enabled when:
1. OPENAI_ENABLE_THINKING=1 is set (explicit enable), OR
2. Model name contains "deepseek-reasoner" or "deepseek-v3.2" (auto-detect)
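The two enablement conditions can be sketched as a small predicate. This mirrors the description above; the real `isOpenAIThinkingEnabled()` in `src/services/api/openai/index.ts` may differ in signature and matching details.

```typescript
// Sketch of the thinking-mode detection described above (assumed signature).
function isOpenAIThinkingEnabled(model: string): boolean {
  // 1. Explicit enable via environment variable
  if (process.env.OPENAI_ENABLE_THINKING === "1") return true;
  // 2. Auto-detect thinking-capable DeepSeek models by name
  const m = model.toLowerCase();
  return m.includes("deepseek-reasoner") || m.includes("deepseek-v3.2");
}

console.log(isOpenAIThinkingEnabled("deepseek-reasoner")); // true
console.log(isOpenAIThinkingEnabled("gpt-4o")); // false unless OPENAI_ENABLE_THINKING=1
```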

Signed-off-by: guunergooner <tongchao0923@gmail.com>
fix: Address PR #206 review feedback for OpenAI thinking mode

- Fix afterEach env var leak: use delete instead of assigning undefined
- Fix turn boundary detection: treat multimodal inputs (e.g. images)
  as new turns, not just text content
- Extract buildOpenAIRequestBody() for testability, replace constant-
  assertion tests with real function output verification
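The turn-boundary fix can be sketched like this. The types and helper name are simplified stand-ins for the Anthropic content shapes, not the PR's actual code: the point is that any real user input, image or text, closes the previous turn, while `tool_result` blocks belong to the ongoing tool-call loop.

```typescript
// Hypothetical sketch of the turn-boundary fix. The earlier logic only
// checked for text blocks, so an image-only user message did not start a new
// turn and stale reasoning_content from the previous turn was not stripped.
type ContentBlock =
  | { type: "text"; text: string }
  | { type: "image"; source: unknown }
  | { type: "tool_result"; content: unknown };

function startsNewTurn(content: string | ContentBlock[]): boolean {
  if (typeof content === "string") return true; // plain text input
  // Any non-tool-result block (text OR image) marks a turn boundary.
  return content.some((block) => block.type !== "tool_result");
}

console.log(startsNewTurn([{ type: "image", source: {} }])); // true (the fix)
console.log(startsNewTurn([{ type: "tool_result", content: "ok" }])); // false
```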

Signed-off-by: guunergooner <tongchao0923@gmail.com>

coderabbitai bot commented Apr 8, 2026

Caution

Review failed

Pull request was closed or merged during review

📝 Walkthrough

Walkthrough

This PR adds "thinking" mode support for OpenAI, particularly for DeepSeek reasoner models. It introduces helper functions to detect thinking enablement via environment variables or model name patterns, extends message conversion logic to preserve Anthropic thinking blocks as reasoning_content when thinking is enabled, and centralizes OpenAI request body construction with conditional thinking parameters.

Changes

  • Test Coverage: src/services/api/openai/__tests__/convertMessages.test.ts, src/services/api/openai/__tests__/thinking.test.ts
    New test suites validating isOpenAIThinkingEnabled() detection across environment variables and model names (DeepSeek reasoner/v3.2 variants), buildOpenAIRequestBody() with conditional thinking parameters, and anthropicMessagesToOpenAI() thinking block preservation/stripping behavior across message turns.
  • Message Conversion Logic: src/services/api/openai/convertMessages.ts
    Added a ConvertMessagesOptions interface with an enableThinking flag; extended anthropicMessagesToOpenAI() to accept an options parameter and compute turn boundaries; modified assistant message conversion to preserve thinking blocks as reasoning_content when thinking is enabled.
  • OpenAI Service Core: src/services/api/openai/index.ts
    Added the isOpenAIThinkingEnabled() helper for thinking mode detection via environment variable or model name auto-detection; added the buildOpenAIRequestBody() helper that centralizes request construction with conditional DeepSeek thinking parameters; refactored queryModelOpenAI() to use the new helpers.
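The centralized builder described above might look roughly like this. Everything here is an assumption for illustration: the signature, option names, and the thinking field names (`thinking` for the official DeepSeek API shape, `chat_template_kwargs` for a vLLM-style deployment) are stand-ins based on the PR description, not the actual implementation.

```typescript
// Illustrative sketch of a centralized OpenAI request-body builder with
// conditional thinking parameters. All names are hypothetical.
interface RequestBodyOptions {
  model: string;
  messages: unknown[];
  maxTokens: number;
  enableThinking: boolean;
}

function buildOpenAIRequestBody(opts: RequestBodyOptions): Record<string, unknown> {
  const body: Record<string, unknown> = {
    model: opts.model,
    messages: opts.messages,
    max_tokens: opts.maxTokens,
    stream: true,
  };
  if (opts.enableThinking) {
    // Assumed official-API-style field
    body.thinking = { type: "enabled" };
    // Assumed vLLM-style field for self-hosted deployments
    body.chat_template_kwargs = { thinking: true };
  }
  return body;
}
```

Centralizing construction this way is what makes the constant-assertion tests replaceable: a test can call the builder directly and assert on the returned object instead of on hard-coded literals.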

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Poem

🐰 Whiskers twitch with delight divine,
As thinking thoughts now intertwine—
DeepSeek reasoners find their voice,
Wrapped in reasoning, a thoughtful choice!
✨ Hop forth with contentment fine! 🌟

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 57.14%, below the required threshold of 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

  • Description Check ✅ Passed: Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: The PR title 'fix: Address PR #206 review feedback for OpenAI thinking mode' clearly and specifically describes the main change.


@guunergooner
Contributor Author

Changes merged into #206 as a single commit.
